A regularized limited-memory BFGS method for unconstrained minimization problems

Authors

  • Shinji SUGIMOTO
  • Nobuo YAMASHITA
Abstract

The limited-memory BFGS (L-BFGS) algorithm is a popular method for solving large-scale unconstrained minimization problems. Since L-BFGS conducts a line search with the Wolfe conditions, it may require many function evaluations for ill-posed problems. To overcome this difficulty, we propose a method that combines L-BFGS with the regularized Newton method. The computational cost for a single iteration of the proposed method is the same as that of the original L-BFGS method. We show that the proposed method has global convergence under the usual conditions. Moreover, we present numerical results that show the robustness of the proposed method.
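
To make the abstract's idea concrete, the following is a minimal numerical sketch of this kind of iteration: an L-BFGS direction computed for a regularized model, with the regularization parameter mu adjusted by a trust-region-style ratio test in place of a Wolfe line search, so no function evaluations are spent on backtracking. This is not the authors' exact algorithm; folding mu into the initial scaling of the two-loop recursion, the surrogate predicted reduction, and all update constants are simplifying assumptions made for illustration.

import numpy as np

def two_loop(g, pairs, gamma, mu):
    # L-BFGS two-loop recursion with a regularized initial matrix (gamma + mu) * I.
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append((a, rho))
        q = q - a * y
    r = q / (gamma + mu)                  # H0 = ((gamma + mu) * I)^(-1)
    for (s, y), (a, rho) in zip(pairs, reversed(alphas)):
        b = rho * y.dot(r)
        r = r + (a - b) * s
    return -r                             # approximately -(B + mu*I)^(-1) g

def regularized_lbfgs(f, grad, x, m=5, mu=1.0, tol=1e-6, max_iter=500):
    pairs = []                            # last m curvature pairs (s_i, y_i)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if pairs:
            s, y = pairs[-1]
            gamma = y.dot(y) / y.dot(s)   # standard L-BFGS initial scaling
        else:
            gamma = 1.0
        d = two_loop(g, pairs, gamma, mu)
        pred = -0.5 * g.dot(d)            # surrogate model decrease, since (B + mu*I) d = -g
        ratio = (f(x) - f(x + d)) / pred if pred > 0 else -np.inf
        if ratio < 0.25:                  # poor model agreement: strengthen regularization, retry
            mu *= 4.0
            continue
        x_new = x + d                     # accept the step without any line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if y.dot(s) > 1e-12:              # store the pair only if curvature is positive
            pairs.append((s, y))
            pairs = pairs[-m:]
        if ratio > 0.75:                  # very good agreement: relax regularization
            mu = max(mu / 2.0, 1e-12)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Usage: minimize the 2-D Rosenbrock function from the standard start point.
    f = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
    g = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0] ** 2),
                            200 * (z[1] - z[0] ** 2)])
    print(regularized_lbfgs(f, g, np.array([-1.2, 1.0])))  # approaches (1, 1)

In the sketch, a small ratio of actual to predicted reduction means the quadratic model is untrustworthy, so mu grows and the step shrinks; a large ratio lets mu shrink toward zero, recovering an essentially unregularized L-BFGS step.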

Similar Articles

A limited memory adaptive trust-region approach for large-scale unconstrained optimization

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems to be solved, while utilizing the structure of limited memory quasi-Newt...

New Quasi-Newton Optimization Methods for Machine Learning

This thesis develops new quasi-Newton optimization methods that exploit the well-structured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored for two categories of machine learning problems: (1) regularized risk minimization problems with c...

Comparison of advanced large-scale minimization algorithms for the solution of inverse ill-posed problems

We compare the performance of several robust large-scale minimization algorithms for the unconstrained minimization of an ill-posed inverse problem. The parabolized Navier–Stokes equation model was used for adjoint parameter estimation. The methods compared consist of three versions of the nonlinear conjugate-gradient (CG) method, the quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, the limited-mem...

An Algorithm for Unconstrained Quadratically Penalized Convex Optimization

A descent algorithm, “Quasi-Quadratic Minimization with Memory” (QQMM), is proposed for unconstrained minimization of the sum, F, of a non-negative convex function, V, and a quadratic form. Such problems arise in regularized estimation in machine learning and statistics. In addition to values of F, QQMM requires the (sub)gradient of V. Two features of QQMM help keep low the number of eval...

Comparison of advanced large-scale minimization algorithms for the solution of inverse ill-posed problems

We compare the performance of several robust large-scale minimization algorithms for the unconstrained minimization of an ill-posed inverse problem. The parabolized Navier–Stokes equations model was used for adjoint parameter estimation. The methods compared consist of two versions of the nonlinear conjugate gradient method (CG), quasi-Newton (BFGS), the limited-memory quasi-Newton (L-BFGS) [15...

Publication year: 2014